
    Similarity K-d tree method for sparse point pattern matching with underlying non-rigidity

    We propose a method for matching non-affinely related sparse model and data point-sets of identical cardinality, similar spatial distribution and orientation. To establish a one-to-one match, we introduce a new similarity K-dimensional tree. We construct the tree for the model set using spatial sparsity priority order. A corresponding tree for the data set is then constructed, following the sparsity information embedded in the model tree. A matching sequence between the two point sets is generated by traversing the identically structured trees. Experiments on synthetic and real data confirm that this method is applicable to robust spatial matching of sparse point-sets under moderate non-rigid distortion and arbitrary scaling, thus contributing to non-rigid point-pattern matching. © 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved
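The similarity K-d tree itself is not reproduced in the abstract, but the core idea — splitting the model set into a tree, indexing the data set with the identical split structure, and pairing points by a shared traversal — can be sketched as follows (function names and the median-split rule are illustrative, not the paper's):

```python
def split_order(points, depth=0):
    """Median-split 2-D points on alternating axes; emit them in the order a
    fixed traversal of the resulting 2-d tree would visit them."""
    if len(points) <= 1:
        return list(points)
    axis = depth % 2
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    # Split node first, then left and right subtrees, recursively.
    return [pts[mid]] + split_order(pts[:mid], depth + 1) + split_order(pts[mid + 1:], depth + 1)

def match(model, data):
    """One-to-one correspondence from traversing identically structured trees."""
    return list(zip(split_order(model), split_order(data)))
```

Because each set is split by its own medians, the pairing is unchanged under uniform scaling of the data set, which echoes the scale-invariance claimed in the abstract.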

    Recognition of human periodic motion: a frequency domain approach

    We present a frequency domain analysis technique for modelling and recognizing human periodic movements from moving light displays (MLDs). We model periodic motions by motion templates, which consist of a set of feature power vectors extracted from unidentified vertical component trajectories of feature points. Motion recognition is carried out in the frequency domain, by comparing an observed motion template with pre-stored templates. This method contrasts with common spatio-temporal approaches. The proposed method is demonstrated by some examples of human periodic motion recognition in MLDs.
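A rough sketch of the frequency-domain idea (the normalisation and bin count here are our own choices, not the paper's): take the FFT of a vertical trajectory, keep the low-frequency magnitudes as a feature power vector, and compare vectors by Euclidean distance.

```python
import numpy as np

def power_vector(traj, n_bins=8):
    """Feature power vector: magnitudes of the first n_bins FFT coefficients
    (DC removed), normalised so comparison ignores overall amplitude."""
    spec = np.abs(np.fft.rfft(traj - np.mean(traj)))[1:n_bins + 1]
    return spec / (np.linalg.norm(spec) + 1e-12)

t = np.linspace(0, 4 * np.pi, 256)
observed = np.sin(2 * t)                 # observed trajectory
template_walk = 0.5 * np.sin(2 * t + 1)  # stored template: same frequency, shifted and scaled
template_run = np.sin(3 * t)             # template for a faster periodic motion

d_walk = np.linalg.norm(power_vector(observed) - power_vector(template_walk))
d_run = np.linalg.norm(power_vector(observed) - power_vector(template_run))
```

Using the magnitude spectrum discards phase, so the comparison tolerates temporal offset between the observation and the stored template.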

    Dynamic segment-based sparse feature-point matching in articulate motion

    We propose an algorithm for identifying articulated motion. The motion is represented by a sequence of 3D sparse feature-point data. The algorithm emphasizes a self-initializing identification phase for each uninterrupted data sequence, typically at the beginning or on resumption of tracking. We combine a dynamic segment-based hierarchical identification with an inter-frame tracking strategy for efficiency and robustness. We have tested the algorithm successfully using human motion data obtained from a marker-based optical motion capture (MoCap) system.
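The hierarchical identification scheme is not spelled out in the abstract, but one underlying cue for segment-based grouping — markers on the same rigid segment keep a constant mutual distance across frames — is easy to sketch (illustrative only, not the paper's algorithm):

```python
import math

def rigid_pairs(frames, tol=1e-6):
    """Return marker index pairs whose mutual distance is constant across all
    frames -- a simple rigidity cue for segment-based identification.
    frames: list of frames, each a list of (x, y) marker positions."""
    n = len(frames[0])
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            dists = [math.dist(frame[i], frame[j]) for frame in frames]
            if max(dists) - min(dists) < tol:
                pairs.append((i, j))
    return pairs
```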

    A tutorial on motion capture driven character animation

    Motion capture (MoCap) is an increasingly important technique to create realistic human motion for animation. However, MoCap data are noisy, and without elaborate manual processing of the data the resulting animation is often inaccurate and unrealistic. In this paper, we discuss practical issues for MoCap-driven character animation, particularly when using commercial toolkits, and highlight open topics in this field for future research. MoCap animations created in this project will be demonstrated at the conference.

    Tracking a walking person using activity-guided annealed particle filtering

    Tracking human pose using observations from fewer than three cameras is a challenging task due to ambiguity in the available image evidence. This work presents a method for tracking using a pre-trained model of activity to guide sampling within an Annealed Particle Filtering framework. The approach is an example of model-based analysis-by-synthesis and is capable of robust tracking from fewer than three cameras with reduced numbers of samples. We test the scheme on a common dataset containing ground-truth motion capture data and compare against quantitative results for standard Annealed Particle Filtering. We find lower absolute and relative error scores for both monocular and 2-camera sequences using 80% fewer particles. © 2008 IEEE
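Annealed Particle Filtering layers progressively sharpened weighting functions over a standard particle filter so that particles escape local modes before committing to the global one. A toy 1-D sketch of a single annealing run (the schedule, noise model, and parameters are illustrative, not those of the paper):

```python
import math
import random

def annealed_particle_filter(likelihood, n_particles=200, n_layers=5, spread=1.0, seed=0):
    """Toy annealed particle filter on a 1-D state: each layer re-weights
    particles with a sharpened likelihood (w ** beta), resamples, and
    diffuses them with shrinking noise."""
    rng = random.Random(seed)
    particles = [rng.uniform(-5, 5) for _ in range(n_particles)]
    for layer in range(n_layers):
        beta = (layer + 1) / n_layers               # annealing schedule: soft -> sharp
        w = [likelihood(x) ** beta for x in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        particles = rng.choices(particles, weights=w, k=n_particles)  # resample
        sigma = spread * (1 - layer / n_layers)     # shrink diffusion each layer
        particles = [x + rng.gauss(0, sigma) for x in particles]
    return sum(particles) / n_particles             # posterior mean estimate

# Bimodal likelihood with the stronger mode at x = 2; annealing should
# concentrate the particle set on the dominant mode.
lik = lambda x: math.exp(-(x - 2) ** 2) + 0.3 * math.exp(-(x + 3) ** 2)
est = annealed_particle_filter(lik)
```

Early layers (small beta) keep both modes alive; later layers progressively favour the stronger mode, which is what lets the method work with far fewer particles than a flat filter.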

    Movement and gesture recognition using deep learning and wearable-sensor technology

    Pattern recognition of time-series signals for movement and gesture analysis plays an important role in fields as diverse as healthcare, astronomy, industry and entertainment. As a new technique in recent years, Deep Learning (DL) has made tremendous progress in computer vision and Natural Language Processing (NLP), but its performance on movement and gesture recognition from noisy multi-channel sensor signals remains largely unexplored. To tackle this problem, this study classifies diverse movements and gestures using four developed DL models: a 1-D Convolutional neural network (1-D CNN), a Recurrent neural network model with Long Short Term Memory (LSTM), a basic hybrid model containing one convolutional layer and one recurrent layer (C-RNN), and an advanced hybrid model containing three convolutional layers and three recurrent layers (3+3 C-RNN). The models were applied to three different databases (DB) and their performances compared. DB1 is the HCL dataset, which includes 6 human daily activities of 30 subjects based on accelerometer and gyroscope signals. DB2 and DB3 are both based on the surface electromyography (sEMG) signal for 17 diverse movements. The improvements and limitations of the models are evaluated and discussed in light of the results.
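The exact architectures are dataset-specific, but the first stage shared by the CNN variants — a 1-D convolution over multi-channel sensor signals, a ReLU, and pooling down to a fixed-length feature vector — can be sketched in plain NumPy (shapes and names are our own, not the study's):

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid 1-D convolution over a (channels, time) signal.
    kernels: (n_filters, channels, width); bias: (n_filters,)."""
    n_f, _, w = kernels.shape
    T = x.shape[1] - w + 1
    out = np.empty((n_f, T))
    for f in range(n_f):
        for t in range(T):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + w]) + bias[f]
    return out

def cnn_features(x, kernels, bias):
    """One conv layer + ReLU + global max pooling: maps a variable-length
    multi-channel signal to a fixed-size feature vector."""
    h = np.maximum(conv1d(x, kernels, bias), 0.0)   # ReLU
    return h.max(axis=1)                            # global max pool
```

A trained network would stack several such layers (and, in the hybrid models, feed the result into recurrent layers); here a single hand-set kernel acting as a rising-edge detector illustrates the mechanics.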

    Human activity tracking from moving camera stereo data

    We present a method for tracking human activity using observations from a moving narrow-baseline stereo camera. Range data are computed from the disparity between stereo image pairs. We propose a novel technique for calculating weighting scores from range data given body configuration hypotheses. We use a modified Annealed Particle Filter to recover the optimal tracking candidate from a low dimensional latent space computed from motion capture data and constrained by an activity model. We evaluate the method on synthetic data and on a walking sequence recorded using a moving hand-held stereo camera
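The weighting scheme is the paper's contribution, but the underlying range computation for a rectified narrow-baseline pair is the standard pinhole relation Z = fB/d. A minimal sketch (variable names are ours):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo range for a rectified pair: Z = f * B / d.
    disparity_px: horizontal pixel offset of a feature between left and
    right images; focal_px: focal length in pixels; baseline_m: camera
    separation in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relationship: small disparities correspond to distant points, so depth error grows quadratically with range, which is why narrow-baseline range data are noisy and benefit from the model-based weighting the abstract describes.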

    Tracking object poses in the context of robust body pose estimates

    This work focuses on tracking objects being used by humans. These objects are often small, fast moving and heavily occluded by the user. Attempting to recover their 3D position and orientation over time is a challenging research problem. To make progress we appeal to the fact that these objects are often used in a consistent way. The body poses of different people using the same object tend to have similarities, and, when considered relative to those body poses, so do the respective object poses. Our intuition is that, in the context of recent advances in body-pose tracking from RGB-D data, robust object-pose tracking during human-object interactions should also be possible. We propose a combined generative and discriminative tracking framework able to follow gradual changes in object-pose over time but also able to re-initialise object-pose upon recognising distinctive body-poses. The framework is able to predict object-pose relative to a set of independent coordinate systems, each one centred upon a different part of the body. We conduct a quantitative investigation into which body parts serve as the best predictors of object-pose over the course of different interactions. We find that while object-translation should be predicted from nearby body parts, object-rotation can be more robustly predicted by using a much wider range of body parts. Our main contribution is to provide the first object-tracking system able to estimate 3D translation and orientation from RGB-D observations of human-object interactions. By tracking precise changes in object-pose, our method opens up the possibility of more detailed computational reasoning about human-object interactions and their outcomes. For example, assistive living systems could go beyond just recognising the actions and objects involved in everyday tasks such as sweeping or drinking, to reasoning that a person has missed sweeping under the chair or has not drunk enough water today. © 2014 Elsevier B.V. All rights reserved.
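Expressing object-pose relative to a body-part coordinate system amounts to a change of reference frame. A minimal 2-D sketch (hypothetical names; the paper works with full 3-D poses from RGB-D data):

```python
import math

def to_part_frame(part_pos, part_yaw, obj_pos, obj_yaw):
    """Express an object's 2-D pose relative to a body-part frame: translate
    by the part position, then rotate by the inverse of the part heading."""
    dx, dy = obj_pos[0] - part_pos[0], obj_pos[1] - part_pos[1]
    c, s = math.cos(-part_yaw), math.sin(-part_yaw)
    rel_pos = (c * dx - s * dy, s * dx + c * dy)
    rel_yaw = obj_yaw - part_yaw
    return rel_pos, rel_yaw
```

Because the relative pose is invariant to where the person stands and faces, poses expressed this way are comparable across different people performing the same interaction, which is what makes per-part predictors feasible.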